You can move objects between Amazon S3 storage classes using automated Lifecycle rules for time-based transitions, or using manual methods like copy operations (CLI/SDK) and S3 Batch Operations for one-time or custom moves.
Amazon S3 offers multiple ways to move objects between storage classes, each designed for different use cases. The most common approach is using S3 Lifecycle rules to automatically transition objects based on age. For one-time bulk moves, S3 Batch Operations can process millions of objects using a manifest. For ad-hoc changes to individual objects, the copy operation can be used to overwrite an object in-place with a new storage class.
S3 Lifecycle rules allow you to automatically transition objects to cheaper storage classes as they age. This is the preferred method for ongoing data management where you want to optimize costs without manual intervention.
Open the Amazon S3 console and navigate to your bucket
Click the Management tab, then select Create lifecycle rule
Name your rule and choose the scope (entire bucket or filter by prefix/tags)
Under Lifecycle rule actions, select Move current versions of objects between storage classes
Add transitions with the number of days after creation. For example, move to Standard-IA after 30 days, then to Glacier after 90 days
Click Create rule
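The same rule can be created programmatically. The sketch below builds the Lifecycle configuration shown above (Standard-IA at 30 days, Glacier Flexible Retrieval at 90) and applies it with boto3's `put_bucket_lifecycle_configuration`; the prefix, rule ID, and bucket name are illustrative assumptions.

```python
def build_lifecycle_config(prefix: str = "logs/") -> dict:
    """Lifecycle config: move current versions to Standard-IA after 30 days,
    then to Glacier Flexible Retrieval (API value: GLACIER) after 90 days."""
    return {
        "Rules": [
            {
                "ID": "tier-down-logs",  # hypothetical rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

def apply_lifecycle(bucket: str) -> None:
    # Requires AWS credentials configured locally (e.g. via `aws configure`).
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=build_lifecycle_config(),
    )
```

Note that the API uses `GLACIER` as the storage-class value for Glacier Flexible Retrieval, and `DEEP_ARCHIVE` for Glacier Deep Archive.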
Amazon S3 supports a waterfall model for transitioning between storage classes using Lifecycle rules. You can only transition to "colder" tiers in the direction shown, not backwards (e.g., you cannot transition from Glacier back to Standard using Lifecycle alone).
S3 Standard → Any other storage class
S3 Standard-IA → Intelligent-Tiering, One Zone-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, Glacier Deep Archive
S3 Intelligent-Tiering → One Zone-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, Glacier Deep Archive
S3 One Zone-IA → Glacier Flexible Retrieval, Glacier Deep Archive
S3 Glacier Instant Retrieval → Glacier Flexible Retrieval, Glacier Deep Archive
S3 Glacier Flexible Retrieval → Glacier Deep Archive only
Note: Objects smaller than 128 KB do not transition by default, because the per-object transition request cost typically outweighs the storage savings for small objects. You can override this behavior with an object size filter in the Lifecycle rule if needed.
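To override the 128 KB default, a Lifecycle rule can specify an explicit object size bound. A minimal sketch, assuming a hypothetical `thumbnails/` prefix; `ObjectSizeGreaterThan` is in bytes, and combining it with a prefix requires the `And` form of the filter:

```python
def build_size_filtered_rule() -> dict:
    """A Lifecycle rule whose explicit size filter (in bytes) overrides the
    default 128 KB minimum, so small objects transition as well."""
    return {
        "ID": "transition-small-objects-too",  # hypothetical rule name
        "Status": "Enabled",
        "Filter": {
            "And": {
                "Prefix": "thumbnails/",
                "ObjectSizeGreaterThan": 0,  # include objects under 128 KB
            }
        },
        "Transitions": [
            {"Days": 30, "StorageClass": "GLACIER_IR"}  # Glacier Instant Retrieval
        ],
    }
```

This rule would go in the `Rules` list passed to `put_bucket_lifecycle_configuration`; weigh the per-object request cost first, since for very small objects it can exceed the savings.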
For immediate changes to existing objects, you can use the copy operation to overwrite an object in place with a new storage class. This works because S3 objects are immutable: the copy writes a new object (or a new version, in versioned buckets) with the updated class.
Each copy is billed as a request (at PUT/COPY rates), and the new storage class takes effect as soon as the copy completes. Be aware that the copy resets the object's creation date, which affects any age-based Lifecycle rules, and that in versioned buckets the previous version is retained and continues to accrue storage charges until it is deleted.
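An in-place copy can be sketched with boto3's `copy_object`, copying the object onto itself while changing only the storage class; the bucket and key here are placeholders:

```python
def build_copy_request(bucket: str, key: str,
                       storage_class: str = "STANDARD_IA") -> dict:
    """Parameters for a CopyObject call that copies an object onto itself,
    changing only its storage class."""
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},  # same object
        "StorageClass": storage_class,
        "MetadataDirective": "COPY",  # keep the existing metadata
    }

def change_storage_class(bucket: str, key: str) -> None:
    # Requires AWS credentials configured locally.
    import boto3
    s3 = boto3.client("s3")
    s3.copy_object(**build_copy_request(bucket, key))
```

A single `CopyObject` call is limited to 5 GB; for larger objects, use a multipart copy (boto3's managed `copy()` transfer method handles this automatically).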
For moving millions of objects, especially when you need complex filtering or when the desired transition isn't supported by Lifecycle rules, S3 Batch Operations is the recommended approach.
Generate a manifest - Create an S3 Inventory report listing all objects to be processed (takes up to 48 hours for the first report)
Create a Batch Operations job - Using the manifest, specify the operation (e.g., copy objects with new storage class)
Configure IAM permissions - Ensure the job has necessary permissions to read source and write destination
Run the job - Execute and monitor progress; S3 tracks status and provides a completion report
Batch Operations copy supports objects up to 5 GB each. For larger objects, use a Batch Operations job that invokes a Lambda function to perform a multipart copy instead. If your objects are in Glacier storage classes, you must first run a Batch Operations restore job, then copy them once the restores complete.
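The steps above can be sketched as a single `create_job` call on the `s3control` client. This builds a copy-in-place job that rewrites every object listed in the manifest with a new storage class; the account ID, bucket, manifest location, and IAM role are placeholder assumptions you would supply:

```python
def build_batch_job_request(account_id: str, bucket: str, manifest_arn: str,
                            manifest_etag: str, role_arn: str) -> dict:
    """Request for an S3 Batch Operations job that copies each object in the
    manifest back into the same bucket with a new storage class."""
    bucket_arn = f"arn:aws:s3:::{bucket}"
    return {
        "AccountId": account_id,
        "ConfirmationRequired": True,  # job waits for manual confirmation
        "Priority": 10,
        "RoleArn": role_arn,  # must allow reading the manifest and copying
        "Operation": {
            "S3PutObjectCopy": {
                "TargetResource": bucket_arn,  # same bucket = copy in place
                "StorageClass": "DEEP_ARCHIVE",
                "MetadataDirective": "COPY",
            }
        },
        "Manifest": {
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {"ObjectArn": manifest_arn, "ETag": manifest_etag},
        },
        "Report": {  # completion report written back to the bucket
            "Bucket": bucket_arn,
            "Format": "Report_CSV_20180820",
            "Enabled": True,
            "Prefix": "batch-reports",
            "ReportScope": "FailedTasksOnly",
        },
    }

def submit_batch_job(**kwargs) -> str:
    # Requires AWS credentials configured locally.
    import boto3
    s3control = boto3.client("s3control")
    return s3control.create_job(**build_batch_job_request(**kwargs))["JobId"]
```

The failed-tasks report is usually the right scope for large jobs: a run over millions of objects produces an unwieldy report if every success is logged.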
Minimum storage durations - Classes like Standard-IA and One Zone-IA require 30 days of storage. Transitioning earlier incurs charges for the remainder of the minimum duration.
Transition costs - You are charged a transition request fee per object when moving between classes. For smaller objects, these request costs can outweigh the storage savings.
Encrypted objects - Remain encrypted throughout the transition process.
Archived objects - Objects in Glacier classes are not available for real-time access. You must restore them before they can be read or copied.
Versioned buckets - If versioning is enabled, copy operations create new versions. Lifecycle rules can manage current and noncurrent versions separately.
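Restoring an archived object before reading or copying it can be sketched with `restore_object`; the retention period and retrieval tier below are illustrative defaults:

```python
def build_restore_request(days: int = 7, tier: str = "Standard") -> dict:
    """RestoreRequest payload: keep the temporary restored copy for `days`
    days, retrieved at the given tier (Expedited, Standard, or Bulk)."""
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

def restore_and_check(bucket: str, key: str) -> None:
    # Requires AWS credentials configured locally.
    import boto3
    s3 = boto3.client("s3")
    s3.restore_object(Bucket=bucket, Key=key,
                      RestoreRequest=build_restore_request())
    # The Restore header on head_object reports whether the restore is
    # still in progress (ongoing-request="true") or complete.
    print(s3.head_object(Bucket=bucket, Key=key).get("Restore"))
```

Restores are asynchronous: depending on storage class and tier they take minutes to hours, so poll the `Restore` header (or use S3 Event Notifications) before attempting the copy.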